A collection of wires through which data is transmitted between the components of a computer. A bus has two parts: the address bus, which carries information about where the data on the bus should go, and the data bus, which carries the actual data being processed. The width of the bus determines how much data can move across it at once; a 16-bit bus transmits only 16 bits at a time. Each bus also has a clock rate measured in megahertz (MHz), with higher ratings meaning faster bus speeds.
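For illustration, a minimal Python sketch of the usual width-times-clock estimate of bus throughput (the example figures are hypothetical, not taken from this text, and real buses add protocol overhead):

    # Rough bus throughput: width (bits) x clock (MHz) / 8 bits-per-byte.
    def bus_bandwidth_mb_per_s(width_bits: int, clock_mhz: float) -> float:
        return width_bits * clock_mhz / 8

    print(bus_bandwidth_mb_per_s(16, 66))   # 16-bit bus at 66 MHz -> 132.0 MB/s
    print(bus_bandwidth_mb_per_s(64, 100))  # 64-bit bus at 100 MHz -> 800.0 MB/s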
A piece of computer hardware that stores, writes, and retrieves data. Hard drives use a method called caching to hold information for faster retrieval later, and they use different technologies to transmit data: Serial ATA drives allow transfer rates of 150 MB/s, while SCSI supports 80 MB/s and parallel ATA runs at 133 MB/s or lower.
A device that reads, writes, and stores information on its medium of transfer, the floppy disk. Floppies come in a couple of different sizes, but they are slowly fading out of the market because of their inability to store large amounts of data.
Compact Disc-Read Only Memory is an optical drive that reads data from CDs; recordable variants can also write and store data. Large amounts of data are written to a disc through a process known as burning, using burning software.
Similar to the desktop in your office, this electronic desktop can hold your documents, files, and folders of information. With it you can manipulate, copy, or update your files just as you would sitting at your desk, only this one is a virtual desk on your computer screen.
Single Inline Memory Module is a circuit board onto which memory chips are soldered. This stick of memory has a 32-bit-wide bus and is typically no longer used because of its age in the technology world.
Dual Inline Memory Module is the next step up in memory architecture, doubling the width of the path to memory. What was a 32-bit bus is now 64 bits wide, matching the wider front-side bus that newer Intel processors were built on. This means you can install single modules instead of having to pair sticks as you do with SIMM RAM.
Random Access Memory is a kind of computer memory in which bytes of data can be accessed in any order. These bytes, eight bits to a byte, are stored on microchips that are soldered together to create larger banks of memory. The two categories of RAM are static and dynamic random access memory. Static RAM is a type of volatile computer memory, meaning it must have a constant flow of power to retain its contents. It is built in small capacities because of its architecture: six transistors are needed to store one bit of data in each of the many cells a microchip has. DRAM requires only one capacitor and one transistor per cell on the microchip; it is also volatile but has a higher density. Since DRAM cells store data as charge on capacitors, the charge leaks away and must be refreshed periodically to keep the contents stored.
Read Only Memory is a method of storing data on a computer whose contents can be read but are usually permanently stored. ROM typically holds firmware, software closely tied to specific hardware. This type of memory is non-volatile because it does not require power for its contents to remain. One example of ROM is your computer's BIOS, which retains its information about your computer even when power is taken away.
The binary number system is the system in which computers communicate internally and with devices directly connected to them. The numbers take the form of ones and zeros because of the way a transistor works in an electrical system: on is one and off is zero. Strings of these digits can represent different kinds of information such as text, pictures, and video.
This is a numbering system similar to the binary system; however, it uses letters as well as numbers to represent the same information or data. Hex form is easier to read because it is more compact: the letter 'A', for example, is written with two hex digits instead of the long string of ones and zeros that defines 'A' in binary.
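A short Python sketch makes the comparison concrete, using the standard ASCII code 65 for 'A':

    # The letter 'A' has ASCII code 65: long in binary, short in hex.
    code = ord('A')
    print(format(code, '08b'))        # 01000001  (eight ones and zeros)
    print(format(code, 'X'))          # 41        (two hex digits)
    print(0b01000001 == 0x41 == 65)   # True: same value, three notations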
This is a set of four numbers (octets) that defines a computer's Internet Protocol address, the address your computer uses to access the internet. For instance, the octet set for a computer might be 192.128.65.3.
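A minimal Python sketch of pulling the four octets out of that example address (assuming standard dotted-quad notation, where each octet is one byte, 0 to 255):

    address = "192.128.65.3"
    octets = [int(part) for part in address.split(".")]
    assert all(0 <= o <= 255 for o in octets)  # each octet fits in one byte
    for o in octets:
        print(o, format(o, '08b'))  # e.g. 192 -> 11000000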
A data transmission process in which multiple data bits are delivered simultaneously. This includes cables for computer devices such as printers, and hard drives that use parallel Integrated Drive Electronics (IDE) connections.
A way to transmit data one bit at a time. It is found in technologies such as USB and FireWire devices, as well as Serial ATA hard drive cables transmitting at 150 MB/s.
An interface developed by the Electronic Industries Alliance to connect different serial devices. The cable comes with either a 25-pin or a 9-pin connection interface. Both work exactly the same because PCs only use 9 of the pins.
Universal Serial Bus is a standard for serial transmission between devices at up to 12 Mbps. It can connect up to 127 devices and is plug-and-play compatible. This technology connects peripheral devices such as mice, flash drives, external hard drives, etc.
FireWire, or IEEE 1394, is another computer standard for serial transmission of data, similar to USB but much faster. The 1394a standard supports up to 400 Mbps and the 1394b standard supports transmission speeds of up to 800 Mbps. This is a very good technology to use with digital video cameras: the fast speeds let you stream vast amounts of data to your PC in no time at all.
Communications Port one is a standard port for communications with the computer. You connect devices such as printers to this port; however, the technology is older and not used nearly as often as a USB port. It comes in several forms, such as a 25-pin D connector for an older printer.
Integrated Drive Electronics is a set of pins with a plastic housing around the perimeter of the pins, used to connect storage and media devices to the computer's motherboard. A floppy drive uses a 34-pin variation, while DVD-RWs and hard drives use a 40-pin connection (with 40- or 80-conductor cables). This is a type of parallel communication, with current standards of 100 MB/s and 133 MB/s.
Uniform Resource Identifier is very similar to a URL. It is a compact string of characters used to identify or name a resource from which information is gathered via the internet. URIs are used over a network, such as the World Wide Web, with a defined set of rules and syntax to connect to various information pages. For example, http://www.google.com is a URI that identifies the Google search engine's home page.
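For illustration, Python's standard urllib.parse can split a URI into the parts its syntax rules define; the /search?q=xhtml portion below is a hypothetical extension of the example, not from the text:

    from urllib.parse import urlparse

    uri = "http://www.google.com/search?q=xhtml"
    parts = urlparse(uri)
    print(parts.scheme)   # 'http'           - the protocol
    print(parts.netloc)   # 'www.google.com' - the host
    print(parts.path)     # '/search'        - the resource
    print(parts.query)    # 'q=xhtml'        - extra parameters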
Hypertext Markup Language is the main markup language in use for today's web pages. It describes the text-based information contained in a document by identifying text as lists, paragraphs, headings, etc., and it allows interactive forms, images, videos, and other objects to be added to a web page. HTML is written with a set of rules using tags, or labels surrounded by left and right angle brackets, like these: < >.
Hypertext Transfer Protocol is a set of rules that governs the transfer and display of information on the World Wide Web. Originally, this protocol was used for retrieving and viewing HTML pages. The standard was developed by the Internet Engineering Task Force and the World Wide Web Consortium. The client makes a request for a web page, and the server sends a response containing the information on the page being requested.
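A minimal sketch of that request/response exchange using Python's standard urllib (the exact status code and headers returned will vary by server and over time):

    from urllib.request import urlopen

    # The client sends a GET request; the server answers with a status,
    # headers, and the page body (HTML in this case).
    with urlopen("http://www.google.com") as response:
        print(response.status)                   # e.g. 200 (OK)
        print(response.headers["Content-Type"])  # e.g. 'text/html; ...'
        print(response.read(80))                 # first bytes of the HTML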
Extensible Markup Language is another general set of rules used to govern specifics on web pages. It is defined by its own set of tags and is structured to share data across many different information systems, mostly through the internet. It has the ability to serialize data and also to encode documents. XML originated as a simplification of SGML and is human-legible: by adding rules and certain constraints to the language, SGML was pared down into XML.
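As a small illustration of serializing data with application-defined tags, here is a sketch using Python's standard xml.etree module; the person/name/school tags are invented for the example:

    import xml.etree.ElementTree as ET

    # Serialize a small record as XML with tags we define ourselves.
    person = ET.Element("person")
    ET.SubElement(person, "name").text = "Steve"
    ET.SubElement(person, "school").text = "FSU"
    print(ET.tostring(person, encoding="unicode"))
    # <person><name>Steve</name><school>FSU</school></person>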
Extensible Hypertext Markup Language provides the same features as HTML but also follows the rules defined by XML. HTML is an application of SGML, making XHTML an application of XML, which imposes a more restrictive set of rules than SGML. XHTML documents, unlike HTML documents, can be processed automatically with standard XML tools, where HTML requires a very complex custom parser. XHTML 1.0 became a World Wide Web Consortium Recommendation on January 26, 2000, and XHTML 2.0 was still under development as its successor.
This term appears in computer software standards and documentation, where it marks a software feature whose use is discouraged, usually because the feature has been made obsolete by newer or better alternatives. A deprecated feature still works in newer software, but it may produce error messages or warnings recommending a different practice.
In markup languages, an HTML element dictates the structure of an HTML document and the hierarchical arrangement of its content. It is an SGML element that meets the requirements of one or more HTML Document Type Definitions. Elements can represent headings, links, paragraphs, embedded media, lists, and other structures, and an element includes both the attributes and the content of the tags used in HTML.
This is the process of checking something, in this case web coding, to make sure it satisfies the criteria specified by the World Wide Web Consortium. Passing this process means one can say that the solution is correct and/or compliant with a set of rules or standards.
This is the organization of a web page into a set of frames, with each frame displaying a different HTML document. Frames are used for things such as sidebar menus or headers that do not move when you scroll up or down a web page, and they can be very efficient and convenient for web developers. For example, if an item needed to be added to a sidebar menu held in a frame, only one file would need to be changed; without framesets, the developer would have to edit each individual page where the sidebar appeared.
This is one doctype used by the W3C validation tool to apply more rules and standards to an HTML or XHTML document. Using Strict will surface more errors when validating a web page because of the more demanding standards the page must abide by.
These are qualities of an element written as name-value pairs separated by "=" inside the start tag of an element, after the element's name. Values should be enclosed in single or double quotes; some values may be left unquoted in HTML, but never in XHTML. Leaving attribute values unquoted is considered unsafe and may produce various error messages.
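To illustrate how a parser sees these name-value pairs, a minimal sketch with Python's standard html.parser; the link and its attributes are invented for the example:

    from html.parser import HTMLParser

    # Report each start tag and its name="value" attribute pairs.
    class TagReporter(HTMLParser):
        def handle_starttag(self, tag, attrs):
            print(tag, attrs)

    TagReporter().feed('<a href="http://www.google.com" title="Search">Google</a>')
    # a [('href', 'http://www.google.com'), ('title', 'Search')]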
Document Type Definition is primarily used to express a schema through a set of declarations that conform to a particular markup syntax; it describes a class, or type, of SGML or XML document in terms of how its syntax is constructed.
This refers to the syntactic validity of strings in some language, in particular the HTML or XHTML documents that are the topic of this section.
In web development this is a stylesheet language used to describe how a document written in a markup language is presented. Its most common application is styling web sites written in XHTML or HTML.
This is another doctype used by the W3C validation tool to apply rules and standards to an HTML or XHTML document. This validation standard is more forgiving of small errors than, say, XHTML Strict.
The World Wide Web Consortium. This is an organization that governs standards for how web pages are developed and validated, including through tools on its website.
Internet Service Provider is an organization or business that provides users with internet access and related services. Many ISPs were originally run by phone companies, but they now range from independent operations to large groups, depending on funding. Besides giving clients access to the web via a communications network, other services include internet domain name registration and hosting, web hosting, and transit. ISPs can use several different technologies, such as broadband wireless access, cable modem, and ISDN, and they connect to the internet in different ways depending on their requirements, including Ethernet, Metro Ethernet, and Gigabit Ethernet.
Transmission Control Protocol/Internet Protocol is the set of communications rules that governs how the internet and most commercial networks operate. The Internet Protocol suite is viewed as a set of layers: the Application layer, the Transport layer, the Network/Internet layer, the Data Link layer, and the Physical layer.
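A minimal sketch of the layering using Python's socket module: the application-layer request (HTTP) rides on the transport layer (TCP), which rides on IP underneath. The 120-byte read is an arbitrary illustrative choice:

    import socket

    # TCP (SOCK_STREAM) carries the bytes; IP handles addressing/routing.
    with socket.create_connection(("www.google.com", 80)) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: www.google.com\r\n\r\n")
        print(s.recv(120).decode("ascii", "replace"))  # application-layer reply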
Is a set of guidelines or rules that governs an operation on the internet and communications over it. Examples include TCP/IP, HTTP, and FTP.
Application Service Provider is an organization that hosts software applications on its own servers within its own facilities; customers rent the use of an application and access it over the internet or a private line connection. An example is the Ispace account we use for uploading and displaying our web pages for this class, a server housed in the College of Information and accessible to Information Technology students and the like.
File Transfer Protocol is a way of transferring files over a TCP/IP network. For example, after we create Ispace pages on our local machines, we upload the new pages to an FTP server through the College of Information.
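A minimal ftplib sketch of such an upload; the host name, credentials, and file name here are hypothetical placeholders, not the College of Information's actual server:

    from ftplib import FTP

    ftp = FTP("ftp.example.edu")                # hypothetical host
    ftp.login(user="student", passwd="secret")  # hypothetical credentials
    with open("index.html", "rb") as page:
        ftp.storbinary("STOR index.html", page)  # upload the new page
    ftp.quit()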
Physical Communication Standards, Devices and Network types
Peer-to-Peer
This is a communications structure in which both sides of the communication process share equal responsibility for initiating, maintaining, and terminating a session. Each computer must be capable of communicating using its own processing, storage, and internet capabilities, as opposed to the client/server model, in which the client requests information and uses the server's resources for the communication process.
This is a means of high-speed communications transmission between computers. It refers to internet access via cable and/or DSL, which can be up to 400 times faster than an analog dial-up connection. The FCC defines a broadband connection as having a minimum upload speed of 200 Kbps.
Is a high-speed computer in a network that stores the data files and programs shared by users. Acting like a remote disk drive for users, the file server stores programs and data, while an application server runs programs and processes data.
Local Area Network is a communications network that serves users within a specific geographical area. The servers hold programs and data that are shared with the clients, and they come in a wide variety of sizes, ranging from Intel-based servers to mainframes. Both thick and thin clients are employed on various networks: a thick client is similar to a normal computer, with normal processing capabilities and drives, while thin clients are stripped-down machines, usually diskless or floppy-only, that retrieve all software and data from the server.
Is the standard access method for local area networks, defined by the IEEE 802.3 standards. Most PCs and Macs today come with 10/100/1000 Ethernet ports connected internally to the motherboard. Gigabit Ethernet is commonly used as a high-speed link between switches and servers.
National LambdaRail is the first transcontinental Ethernet network; it runs over fiber-optic lines to connect high-speed national computer networks in the United States. It is shared by the organization of research institutions that helped develop it. It is a university-based and university-owned initiative, in contrast to Internet2, which is a university-corporate partnership.
This is a model of network communications. A client, or user, requests data from a server for information to be processed. The server holds most of the processing and storage power, and the client requests these resources when querying for information.
This is the main physical connection in a network system, the main trunk that connects all other nodes to the internet. Most backbones in use today are fiber optic, but in a smaller network, such as the one in your home, the backbone could simply be the coaxial line from your cable modem, which usually connects to fiber somewhere down the street at the cable company's access point.
In a network using the Ethernet standard, this is a device that connects clients and servers together using wired, physical cabling. Most hubs are called active hubs because they regenerate the data bits on the output so that the signal is just as strong as when it left the signal source.
This is a device that forwards data packets from one network location to another. Its internal architecture contains routing tables that read each incoming packet and decide how to forward it; the destination address must be determined first so the router knows which interface to use for outgoing packets. Most routers have their own built-in firewall, so each computer does not require one, but it is suggested that you also run a software firewall for increased protection. Routers inspect the network address in each packet, and so do more processing and add more overhead than a switch or bridge.
This is an electronic or mechanical device that directs the flow of optical or electrical signals from one side to another. Switches with more than two ports are able to route traffic as well.
This is a computer system on a network that is shared by multiple users. Servers come in a variety of sizes, from Intel-based PCs to large IBM mainframes. Large companies keep servers on racks in a datacenter, and all access to the network must go through them. Servers can be classified by purpose; for example, a web server might provide storage space to a consumer for a personal website.
This is the wireless standard created by the IEEE. The first 802.11 specification was introduced in 1997, operating in the unlicensed 2.4 GHz band at only 1 Mbps. In 1999, 802.11b was introduced, providing access speeds up to 11 Mbps. 802.11a followed, producing up to 54 Mbps of bandwidth in the 5 GHz spectrum. Then wireless 802.11g was introduced, providing backwards compatibility with the older 802.11b standard on the same 2.4 GHz frequency while allowing a throughput of 54 Mbps. The newest standard, 802.11n, is expected to be released in 2008 and uses multiple antennas for speeds of 100 Mbps and more.
Stands for Domain Name System server, a dedicated server that provides DNS name resolution in an IP network. DNS servers are used in large companies, in all ISPs, and throughout the DNS system that keeps the internet working; they typically do not exist in a small business or a home. DNS is sort of like a phone book for the internet, translating so-called domain names or hostnames, e.g. wikipedia.com, into IP addresses such as 66.230.200.100 so that network equipment can deliver information.
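For illustration, one line of Python asks the configured DNS server to do exactly this translation (the address returned changes over time, so the output will vary):

    import socket

    # Ask the configured DNS server to translate a hostname into an IP.
    print(socket.gethostbyname("wikipedia.com"))  # prints the current IP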
Concurrent Versions System is a version control system made for Unix, developed as a set of shell scripts in the mid-1980s. It tracks the changes between one version of source code and another, storing all the changes made in a specific file. It supports group work by merging the changes contributed by each programmer.
Developed by Bram Cohen, this is a very popular file-sharing service that discourages users from constantly downloading unless they are willing to share in the overall transmission load on the network. Released in 2001, it is similar to KaZaA in that users download from one another without a centralized directory such as the original Napster service; it therefore makes every downloading user an uploading user as well. It breaks a file into chunks and distributes them among several users. In 2004, BitTorrent transfers were estimated to account for one third of all internet traffic.
This is an electronic data file. A font embodies a typeface, a group of symbols designed with stylistic unity across applications, and can comprise punctuation marks, symbols such as mathematical signs, numerals, and of course the letters of the alphabet. Fonts come in three basic file formats: bitmap fonts, which represent each symbol at a given face and size as an image of dots; outline fonts, which use mathematical formulas to describe each glyph on the page; and stroke fonts, which define the profile of each character using specific lines.
These are very closely related symbols, numerals, and alphabets that share the same typeface design work and come in groups ranging into the hundreds of styles. They do not vary in design structure; however, they do vary in width, weight, and the way they are displayed on the screen.
Meaning "without", this is a type of font that does not have the small finishing features, or serifs, at the ends of the letter or character strokes. Sans-serif fonts are most commonly used for headlines, as opposed to the paragraphs of text in documents found throughout the web.
This is a type of font style that does have small lines at the end of each character stroke. These lines are said to help guide the eye along the lines of larger blocks of text, making reading faster; without them, depending on the size of the font, the reader's eyes may see excess clutter on the screen.
Is a typeface that looks similar to handwriting, done mostly by calligraphers, and is distinguished by the slight slant of the letters in the text. However, the term is sometimes wrongly applied to fonts, mostly sans-serifs, that merely distort the lettering by slanting it.
These are fonts in which different characters have different widths. For example, the letter 'A' takes up more space than an 'I' would in the same allotted spot on the screen next to the same characters.
This is how a font or character is measured, a standard of vertical measurement. Fonts come in various point sizes; for instance, a 20-point font takes up more vertical space on the screen than a smaller 12-point font.
Is a letter, numeral, or other symbol displayed in a text document, image, or elsewhere that follows the standards of the specific font in question; it has a specific size, angle, and direction within a string. Three parameter groups describe a character: P1 and P2 specify the X-coordinate of the character's start location as a two-byte value; P3 and P4 specify the Y-coordinate of the start location, also as a two-byte value; and P5 through Pn, if present, are specific points in the coding of the character within a character set, each with a byte value of one.
This is a website of entries or information displayed in chronological or reverse-chronological order. Entries can include links, web pages, text, images, and other media. Blogs cover subjects including, but not limited to, news, food, politics, technology, and diaries, and even serve as online publications of various books.
This is a collaborative website that can be edited by any person or persons who access it. The first wiki developer was Ward Cunningham, who described it as "the simplest online database that could possibly work", with Wikipedia being a widely known database that can be updated and changed on a whim. There are, however, instances where people cannot bypass the rules written for wiki pages: for instance, if a student wrote "FSU sucks" on a wiki page defining FSU, a special team overseeing all changes would change the information back to reliable and true content.
Network neutrality is a principle proposed for residential broadband networks, and possibly all networks. Neutrality advocates call for networks free of restrictions on the kinds of equipment and modes of communication allowed by broadband providers, including all the hardware a consumer might use to hook up to the connection via the ISP.
This is a key on the keyboard. In the computing world, the forward slash separates directories, folders, and files on the World Wide Web and on Unix-based computers.
Introduced first by Bob Bemer, this key on the keyboard is used throughout computing. It appears in programming languages such as C and Perl to indicate a certain meaning or action, and in MS-DOS the backslash is used as a delimiter between directories and filenames in a path expression.
Is a symbol used to abbreviate the word "at" in ASCII. It is used in different industries, such as accounting and commercial invoicing, for example: 8 pizzas @ $8 ea = $64. However, the most recent explosion in the use of this symbol is in e-mail addresses, such as sgp07d@fsu.edu.
This symbol stands for the word "and" in the ASCII coding system for computers. For example, "Steve & Bill" is a shorter way to write "Steve and Bill" with the same meaning and effect.
This ASCII symbol, the tilde, takes its name from a Latin word meaning title or superscription. It can be used in place of a hyphen in writing; for example, 12~15 means 12 to 15.
File System structures/Computer Operating Systems/OS terms
OS
Operating System is the underlying software on your computer that interacts with the system hardware. The operating system's functions vary depending on which one is in use. Standard operating systems use 32-bit file structures, with newer OSs supporting 64-bit structures. Examples include Windows 95, 98, 2000, ME, NT, XP, and Vista; Mac OS 8, 9, and X; and Linux under KDE 3.0 or SUSE 10.
A piece of software used to control hardware connected to your computer; it tells the operating system how to recognize a specific device. Device drivers are shipped with hardware such as routers, NICs, mice, keyboards, printers, and modems. Most of the time the operating system has default drivers available to run the hardware; however, there are occasions when you must use the manufacturer's device driver enclosed in the box.
Simulating more memory than actually exists, allowing the computer to run larger programs or more programs concurrently. It breaks a program into small segments, called "pages," and brings as many pages from disk into memory as fit into a reserved area for that program. When additional pages are required, it makes room for them by swapping others out to disk, keeping track of pages that have been modified so that they can be retrieved when needed again. (techweb.com)
Storing data in non-contiguous areas on disk. As files are updated, new data are stored in available free space, which may not be contiguous. Fragmented files cause extra head movement, slowing disk accesses. A defrag program is used to rewrite and reorder all the files.(Techweb.com)
Software that creates a virtual environment between a computer platform and its operating system, so a user can run software built for a machine with a different OS. It provides isolation between users and processes, as well as between different operating systems, and can give multiple users the illusion of having an entire "private" computer, isolated from other users, on a single machine. (Wikipedia.com)
The File Allocation Table is the simplest form of file system, supported by Windows 95, 98, and NT. It contains a table that resides at the very top of the hard disk volume; two copies of the FAT are kept in case one becomes corrupt or damaged. FAT uses 8.3 construction when naming files, for instance steven12.doc. In order for the system's boot files to load correctly, the FAT tables and root directory are stored in a fixed location on the disk so they can be accessed first when the system starts up. There are variations of the technology, including FAT16 and FAT32. FAT16 has a fixed number of clusters per partition, so the larger the hard disk, the larger each cluster becomes: in a 2-GB partition, each cluster takes up 32 KB, even if the file stored in it is smaller. FAT32, introduced in Windows 95 OSR2, is an extension of the FAT16 system that can handle a much larger number of clusters per partition.
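A small Python sketch of the cluster arithmetic described above: files occupy whole clusters, so large clusters waste space ("slack") on small files:

    import math

    # A file always occupies whole clusters on disk.
    def bytes_on_disk(file_bytes: int, cluster_bytes: int) -> int:
        return math.ceil(file_bytes / cluster_bytes) * cluster_bytes

    # A 1 KB file in a FAT16 32 KB cluster (2-GB partition) uses 32 KB...
    print(bytes_on_disk(1024, 32 * 1024))  # 32768 bytes, ~31 KB wasted
    # ...but only 4 KB with the smaller clusters FAT32 makes possible.
    print(bytes_on_disk(1024, 4 * 1024))   # 4096 bytes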
First introduced in the first version of Windows NT, this file system is very different from its predecessor. It provides better security on files, including compression, encryption, and quotas on disk space usage. It is the file system used in Windows XP and is not compatible with older OSs on the same computer. With the increased recovery abilities of NTFS, it is not necessary to keep a FAT partition for backup needs.
A modem is a device used by computers to connect to the internet via a communications port. The port can be on the motherboard or on a peripheral card inserted into a slot. Typically, modems use an RJ-11 jack and support speeds of up to 56.6 Kbps.
Infrared is an invisible band of radiation just below visible light on the electromagnetic spectrum, with wavelengths from 750 nm to 1 mm: it starts where microwaves end and ends at the beginning of visible light. The technology requires an unobstructed line of sight between transmitter and receiver. It is used in many computer devices, including audio and video remote controls, as well as for transmission between a computer and a mouse or keyboard via infrared rays.
Radio frequency is a rate of oscillation (frequency) within a certain range, corresponding to electrical signals of alternating current. The radio spectrum spans roughly 3 Hz to 300 GHz: it starts at the bottom, 3 Hz to 30 Hz, in the Extremely Low Frequency range, and ends with EHF, Extremely High Frequency, at 30 GHz to 300 GHz. Radio frequency is used in computer networks to broadcast communications signals for cellular phone companies and university networks, as well as simple home wireless networks.
This is a destructive program that replicates itself throughout a single computer or across an entire network, whether wired or wireless. It damages the computer by reproducing and consuming memory and internal disk space within a single computer, or by eating up network bandwidth. The word worm implies an automatic method of reproducing itself on other computers.
A program routine that destroys data when certain conditions are met, for example by reformatting a hard disk or inserting random bits into data files on a certain date. They get the name because they deliver their payload only after a specific latency, or when a triggering event occurs.
This is software code written to infect a computer. It can be buried beneath layers of code in an existing program; once the program is executed, the virus code is activated. Activation triggers the virus to attach a copy of itself to other programs in the system, which makes the virus harder to wipe clean and lets it do more damage. The damage can be as simple as a pop-up message, the blue screen of death, or a slow destruction of programs, or the virus may even lie dormant waiting for a certain date to arrive.
This is e-mail that is not requested by the user. Also known as "unsolicited commercial e-mail" (UCE) or "junk mail," it is mostly used to advertise products and sometimes to broadcast political or social commentary.
Said like the word "fishing," this term pertains to security. It is a scam to steal valuable information from consumers, such as user IDs, passwords, social security numbers, etc. It works by sending consumers what appear to be official-looking e-mails from establishments such as retailers or banks, asking many recipients to update valuable or important information. Because the message appears real, victims buy into the scam, and the scammers then take the information and use it however they please. Note that anyone can do this with a set of software tools that imitate target websites.
This is a common cause of software malfunction. If the amount of data written into a buffer exceeds the size of the buffer, the additional data is written into adjacent areas, which could hold other buffers, constants, flags, or variables. Hackers can exploit buffer overflows by appending executable instructions to the end of the data, causing that code to run after it has entered memory.
This is about the manipulation of individuals: a collection of techniques used to manipulate people into performing actions or giving up confidential information. While similar to a confidence trick or simple fraud, the term typically applies to trickery for information gathering or computer system access, and in most cases the attacker never comes face-to-face with the victim.
Stands for Virtual Private Network. This is, as the name suggests, a private network configured within a public network in order to take advantage of the larger network's technology. VPNs are widely used by companies to create wide area networks that span large geographic regions, providing connections between offices and to mobile users. For years, VPNs have shared the same physical network backbone as consumer networks, yet consumers see these VPNs as private national or international networks.
Stands for Pretty Good Privacy, a data encryption program from PGP Corporation. Published as freeware in 1991, this software was and is widely used around the world for encrypting e-mail messages and securing files. It is available for commercial use and as freeware for personal use; you can get the freeware from www.pgpi.org. For e-mail applications, the program sends the key and the encrypted message at the same time: it encrypts the key using a public key algorithm such as RSA and encrypts the message using a secret key algorithm such as IDEA. On the receiving end, the secret key is decrypted first (using the public key method) so it can be used to decrypt the message. PGP also supports digital signatures and PKI. PGP was developed by Phil Zimmermann in San Mateo, CA.
Is a chronological sequence of audit records, each of which contains evidence directly pertaining to and resulting from the execution of a business process or system function.
Depending on the technology involved, this word can mean various things. It can mean the accessibility of a system resource in a timely manner, measured by a system's uptime; it is also one of the six fundamental components of information security, the Parkerian Hexad. The availability of a wireless network may be poor or excellent depending on weather conditions; the availability of information on a certain computer topic may be limited or vast; and the availability of your files from the job you just got fired from is most likely nil, due to companies' ability to secure inside information.
This is a password that is hard to detect or decrypt, not only by humans but also by computers and their programs. Two things that make a password stronger are a larger number of characters and a mix of numeric digits with both upper- and lower-case letters.
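A minimal sketch of both ideas using Python's secrets module: a 16-character password drawn from mixed-case letters and digits (the length and alphabet are illustrative choices):

    import secrets
    import string

    # Longer password, mixing upper/lower case letters and digits.
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(16))
    print(password)  # e.g. 'kQ4zR7mW2sT9xHa1' - output varies each run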
Another part of the Parkerian Hexad, which contains the six fundamental components of information security. Confidentiality pertains to restrictions on the accessibility and dissemination of information.
Authenticity is the correct attribution of origin, such as the authorship of an e-mail message or the correct description of information, for example a data field that is properly named. This is also one of the six fundamental components of information security.
This is a primary method of keeping a computer secure from intruders. It works by blocking or allowing traffic into or out of a private network or a user's computer. Firewalls are great for secure internet access and for separating a company's public web server from its internal network; for example, a company might keep accounting firewalled so that not just any employee can come across its records within the enterprise.
Public Key Infrastructure is a framework for creating a secure method of exchanging information based on public key cryptography. It issues digital certificates that authenticate the identity of organizations and individuals over a public system such as the internet; the certificates are also used to sign messages, ensuring that messages have not been tampered with. PKI can also be used in-house: an enterprise can implement it to authenticate employees accessing the network, acting as its own certificate authority.
In relation to data, this is the quality of completeness, wholeness, correctness, soundness, and compliance with the intention of the data's creators. It is achieved by preventing accidental or deliberate unauthorized insertion, modification, or destruction of data in a database. It is one of the six fundamental components of information security.
In relation to data, this is the reversible transformation of data from its original form (plaintext) to a difficult-to-interpret format (ciphertext) as a mechanism for protecting its integrity, authenticity, and confidentiality. It uses an encryption algorithm and one or more encryption keys.
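As a toy illustration of that reversibility, here is a simple XOR cipher in Python. This is NOT a secure algorithm, only a demonstration that encryption is a key-driven, reversible transformation of plaintext into ciphertext:

    # Toy XOR cipher: applying the same key twice recovers the plaintext.
    def xor_cipher(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    key = b"secret-key"
    ciphertext = xor_cipher(b"meet at noon", key)
    print(ciphertext)                   # difficult-to-interpret bytes
    print(xor_cipher(ciphertext, key))  # b'meet at noon' - recovered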